🎉 Multiple Papers from Our Team Accepted by CVPR 2022

📅 March 2, 2022
⏱️ 1 min read
CVPR 2022

We are thrilled to announce that CVPR 2022 has officially released the list of accepted papers, and multiple papers from our team are included! This is an outstanding achievement that showcases the exceptional research quality and innovation of our laboratory.

About CVPR 2022

The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) is the premier annual computer vision event, bringing together researchers and practitioners from around the world. CVPR is widely recognized as one of the top-tier conferences in computer vision and artificial intelligence, with extremely competitive acceptance rates.

Papers accepted by CVPR represent cutting-edge research that advances the state-of-the-art in computer vision, pattern recognition, and machine learning applications.

Paper 1: Online Action Detection with Exemplar Query Mechanism

📄 Online Action Detection with Exemplar Query

Authors: Le Yang (杨乐), Junwei Han (韩军伟), Dingwen Zhang (张鼎文)

Conference: IEEE/CVF Conference on Computer Vision and Pattern Recognition 2022

Research Background

Temporal action localization aims to discover meaningful action segments in long videos and to annotate their start times, end times, and action categories. By condensing the informative content of a video, it supports video understanding tasks such as intelligent surveillance and content analysis.

In practical applications, algorithms must process video streams online and detect ongoing actions promptly and accurately. Online action detection has therefore emerged as a new research direction with significant real-world importance.

Key Innovation: Exemplar Query Mechanism

Existing methods consider only information from a fixed historical window and cannot model cross-video relationships. To address these limitations, this paper proposes an exemplar query mechanism that lets the detector draw on exemplar information beyond the current video.
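
The paper's design is not reproduced in this post, but the general flavor of such a mechanism can be illustrated with a minimal sketch, assuming a standard cross-attention layer; the module name, feature dimensions, and class count below are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class ExemplarQueryHead(nn.Module):
    """Illustrative sketch: classify the ongoing action of the current frame
    by querying a bank of exemplar features gathered across videos with
    cross-attention. Not the authors' implementation."""

    def __init__(self, feat_dim=512, num_classes=21, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, frame_feat, exemplar_bank):
        # frame_feat:    (B, 1, D) feature of the latest observed frame
        # exemplar_bank: (B, N, D) exemplar features drawn from other videos
        fused, _ = self.attn(query=frame_feat, key=exemplar_bank, value=exemplar_bank)
        return self.classifier(fused.squeeze(1))  # (B, num_classes)

# Usage: score the current frame of 2 video streams against 64 exemplars each.
head = ExemplarQueryHead()
logits = head(torch.randn(2, 1, 512), torch.randn(2, 64, 512))
print(logits.shape)  # torch.Size([2, 21])
```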

Significance

This work provides a simple yet effective baseline model for future research in online action detection, offering both efficiency and accuracy improvements that are crucial for real-world video understanding applications.


Paper 2: CT Image Synthesis via Incremental Cross-View Mutual Distillation

📄 Incremental Cross-View Mutual Distillation for CT Synthesis

Authors: Chaowei Fang (方超伟), Liang Wang (王良), Jun Xu (徐君), Yixuan Yuan (袁奕萱), Dingwen Zhang (张鼎文), Junwei Han (韩军伟)

Conference: IEEE/CVF Conference on Computer Vision and Pattern Recognition 2022


Research Background

High-resolution CT images can help doctors and medical AI systems perform accurate imaging analysis and disease diagnosis. However, due to the characteristics of human body structure, it is difficult to obtain sufficiently high inter-slice resolution for CT images in the axial view.

Key Innovation: Incremental Cross-View Mutual Distillation

This paper develops a self-supervised method for generating axial-view CT slices and proposes an incremental cross-view mutual distillation learning mechanism, in which predictions made from different anatomical views teach one another.
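
As a rough, hedged sketch of what a cross-view mutual distillation objective can look like in code (the L1 consistency terms, view pairing, and tensor shapes are assumptions for illustration; the paper's incremental training schedule is not modeled here):

```python
import torch
import torch.nn.functional as F

def mutual_distillation_loss(vol_axial, vol_coronal, vol_sagittal):
    """Illustrative sketch: encourage CT volumes reconstructed from different
    anatomical views to agree with one another, so each view-specific model
    acts as a teacher for the others. Each tensor has shape (B, 1, D, H, W)."""
    preds = [vol_axial, vol_coronal, vol_sagittal]
    loss = 0.0
    for i, student in enumerate(preds):
        for j, teacher in enumerate(preds):
            if i != j:
                # Each prediction is pulled toward the others, treated as fixed teachers.
                loss = loss + F.l1_loss(student, teacher.detach())
    return loss

# Usage with dummy predictions from three view-specific generators.
pred_ax, pred_co, pred_sa = (torch.randn(1, 1, 64, 128, 128) for _ in range(3))
print(mutual_distillation_loss(pred_ax, pred_co, pred_sa).item())
```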

Clinical Significance

This research addresses a critical challenge in medical imaging, helping doctors and medical AI systems carry out more accurate imaging analysis and disease diagnosis from CT volumes.


Paper 3: Zero-Shot Object Detection via Robust Region Feature Synthesis

📄 Robust Region Feature Synthesizer for Zero-Shot Object Detection

Authors: Peiliang Huang (黄培亮), Junwei Han (韩军伟), De Cheng (程德), Dingwen Zhang (张鼎文)

Conference: IEEE/CVF Conference on Computer Vision and Pattern Recognition 2022

Research Background

Zero-shot object detection aims to enhance a model's ability to detect object classes that are not visible during the training phase. This is a crucial capability for building generalizable AI systems that can recognize novel objects without requiring extensive labeled training data.

Key Challenges

Traditional zero-shot learning models face significant difficulties when applied to the object detection task.

Key Innovation: Robust Region Feature Synthesis

Taking the particular demands of object detection into account, this work proposes to exploit the rich foreground and background region features contained in training images so that the synthesized features for unseen classes retain both intra-class diversity and inter-class discriminability.
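
A minimal sketch of the general idea of a region feature synthesizer is given below, assuming a simple conditional generator that maps class semantic embeddings plus noise to region features; the architecture, dimensions, and training losses are illustrative assumptions rather than the paper's design:

```python
import torch
import torch.nn as nn

class RegionFeatureSynthesizer(nn.Module):
    """Illustrative sketch: generate region features for unseen classes from
    their semantic embeddings plus noise. The noise provides intra-class
    diversity; conditioning on the class embedding keeps classes separable.
    Dimensions and architecture are assumptions."""

    def __init__(self, sem_dim=300, noise_dim=300, feat_dim=1024):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(sem_dim + noise_dim, 1024),
            nn.LeakyReLU(0.2),
            nn.Linear(1024, feat_dim),
            nn.ReLU(),
        )

    def forward(self, sem_embed, num_samples=8):
        # sem_embed: (C, sem_dim) semantic embeddings of unseen classes
        sem = sem_embed.repeat_interleave(num_samples, dim=0)
        noise = torch.randn(sem.size(0), self.noise_dim, device=sem.device)
        return self.net(torch.cat([sem, noise], dim=1))  # (C * num_samples, feat_dim)

# Usage: synthesize 8 diverse region features for each of 5 unseen classes;
# such features could then be used to train the detector's classifier for them.
gen = RegionFeatureSynthesizer()
feats = gen(torch.randn(5, 300))
print(feats.shape)  # torch.Size([40, 1024])
```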

Research Significance

This work makes an important contribution toward detecting object classes never seen during training, a key step toward generalizable detection systems that do not rely on extensive labeled data.


Overall Impact

The acceptance of these three papers by CVPR 2022 demonstrates our laboratory's research excellence across multiple important areas, from online video understanding and medical image synthesis to zero-shot object detection.

These diverse research directions showcase our team's capability to address critical challenges in computer vision and its applications to real-world problems in surveillance, healthcare, and beyond.

Research Applications

The technologies developed in these papers have broad practical applications, including intelligent surveillance, video content analysis, and medical imaging analysis and diagnosis.

Conclusion

We are extremely proud of our team's achievement in having multiple papers accepted by CVPR 2022, one of the most prestigious and competitive conferences in computer vision. These acceptances reflect the high quality, innovation, and impact of our research.

The three accepted papers span diverse and important research areas, from real-time video understanding to medical image synthesis and zero-shot object detection. This diversity demonstrates our laboratory's comprehensive research capabilities and our commitment to advancing computer vision technology for real-world applications.

Congratulations to all team members involved in these outstanding research projects! 🎊